Results 1 - 20 of 218
1.
J Gambl Stud ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724824

ABSTRACT

Computer technology has long been touted as a means of increasing the effectiveness of voluntary self-exclusion schemes, especially in terms of relieving gaming venue staff of the task of manually identifying and verifying the status of new customers. This paper reports on the government-led implementation of facial recognition technology as part of an automated self-exclusion program in Adelaide, South Australia, one of the first jurisdiction-wide enforcements of this controversial technology in small-venue gambling. Drawing on stakeholder interviews, site visits and documentary analysis over a two-year period, the paper contrasts initial claims that facial recognition offered a straightforward and benign improvement to the efficiency of the city's long-running self-excluded gambler program with subsequent concerns that the new technology was associated with heightened inconsistencies, inefficiencies and uncertainties. As such, the paper contends that, regardless of the enthusiasm of government, the tech industry and the gaming lobby, facial recognition does not offer a ready 'technical fix' to problem gambling. The South Australian case illustrates that this technology does not appear to better address the core issues underpinning problem gambling or to substantially improve conditions for problem gamblers to refrain from gambling. It is concluded that the gambling sector needs to pay close attention to the practical outcomes arising from initial cases such as this, and to resist industry pressures for the wider replication of this technology in other jurisdictions.

2.
MedComm (2020) ; 5(4): e526, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38606361

ABSTRACT

Malnutrition is a prevalent and severe issue in hospitalized patients with chronic diseases. However, malnutrition screening is often overlooked or inaccurate due to a lack of awareness and experience among health care providers. This study aimed to develop and validate a novel digital, smartphone-based, self-administered tool that uses facial features, especially the ocular area, as indicators of malnutrition in inpatients with chronic diseases. Facial photographs and malnutrition screening scales were collected from 619 patients in four different hospitals. A machine learning model based on a back-propagation neural network was trained, validated, and tested using these data. The model showed a significant correlation (p < 0.05) and high accuracy (area under the curve 0.834-0.927) across different patient groups. This point-of-care mobile tool can screen for malnutrition with good accuracy and accessibility, showing its potential for screening malnutrition in patients with chronic diseases.
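The abstract does not give the network architecture or the feature pipeline, so the following is only a rough sketch of the general approach it describes: a small back-propagation (feed-forward) neural network, here scikit-learn's MLPClassifier, trained on pre-extracted facial/ocular feature vectors to output a malnutrition-risk score. The feature dimensions, labels, and data are simulated assumptions, not details from the study.

```python
# Hypothetical sketch of a back-propagation neural network screen for
# malnutrition risk from pre-extracted facial/ocular features.
# Feature names, dimensions, and labels are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier  # feed-forward net trained by back propagation
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(619, 32))          # 619 patients, 32 assumed facial/ocular features
y = rng.integers(0, 2, size=619)        # 1 = screened positive for malnutrition (simulated)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Illustrative AUC on simulated data: {auc:.3f}")
```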

3.
Diabetes Metab Syndr ; 18(4): 103003, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38615568

ABSTRACT

AIM: To build a facial image database and to explore the diagnostic efficacy and influencing factors of an artificial intelligence-based facial recognition (AI-FR) system for multiple endocrine and metabolic syndromes. METHODS: Individuals with multiple endocrine and metabolic syndromes and healthy controls were included from public literature and databases. In this facial image database, facial images and clinical data were collected for each participant, and the disease facial recognition intensity (dFRI) was calculated to quantify the facial complexity of each syndrome. AI-FR diagnosis models were trained for each disease using three algorithms: support vector machine (SVM), principal component analysis k-nearest neighbor (PCA-KNN), and adaptive boosting (AdaBoost). Diagnostic performance was evaluated, with optimal efficacy defined as the best index among the three models. Factors affecting AI-FR diagnosis were explored with regression analysis. RESULTS: 462 cases of 10 endocrine and metabolic syndromes and 2310 controls were included in the facial image database. The AI-FR diagnostic models showed diagnostic accuracies of 0.827-0.920 with SVM, 0.766-0.890 with PCA-KNN, and 0.818-0.935 with AdaBoost. Higher dFRI was associated with a higher optimal area under the curve (AUC) (P = 0.035). No significant correlation was observed between the sample size of the training set and diagnostic performance. CONCLUSIONS: A multi-ethnic, multi-regional, and multi-disease facial database for 10 endocrine and metabolic syndromes was built. The AI-FR models displayed ideal diagnostic performance. dFRI was associated with diagnostic performance, suggesting that inherent facial features might contribute to the performance of AI-FR models.
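As an illustration of the three classifier families named above (SVM, PCA-KNN, and AdaBoost), the sketch below trains each one on simulated facial-feature vectors with scikit-learn and compares cross-validated AUCs. Feature dimensions, sample sizes, and labels are placeholders, not the study's data.

```python
# Sketch (not the study's code): compare the three classifier families named in
# the abstract -- SVM, PCA + k-NN, and AdaBoost -- on simulated facial features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))        # simulated facial feature vectors
y = rng.integers(0, 2, size=300)      # 1 = syndrome, 0 = control (simulated)

models = {
    "SVM": SVC(kernel="rbf"),
    "PCA-KNN": make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5)),
    "AdaBoost": AdaBoostClassifier(n_estimators=200),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```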

4.
Bioengineering (Basel) ; 11(4)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38671805

ABSTRACT

BACKGROUND: Facial recognition systems utilizing deep learning techniques can improve the accuracy of facial recognition technology. However, it remains unclear whether these systems should be available for patient identification in a hospital setting. METHODS: We evaluated a facial recognition system using deep learning and the built-in camera of an iPad to identify patients. We tested the system under different conditions to assess its authentication scores (AS) and determine its efficacy. Our evaluation included 100 patients in sitting, supine, and lateral positions, with and without masks, and under nighttime sleeping conditions. RESULTS: The unmasked certification rate of 99.7% was significantly higher than the masked rate of 90.8% (p < 0.0001). In addition, the authentication rate exceeded 99% even during nighttime sleep. Even for patients wearing masks, we achieved a 100% success rate for authentication, regardless of illumination, if they were sitting with their eyes open. Overall, the facial recognition system was safe and acceptable for patient identification within a hospital environment. CONCLUSIONS: This is the first systematic study to evaluate facial recognition among hospitalized patients under different conditions. The facial recognition system using deep learning for patient identification shows promising results, proving its safety and acceptability, especially in hospital settings where accurate patient identification is crucial.

5.
J Stomatol Oral Maxillofac Surg ; : 101843, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38521241

ABSTRACT

OBJECTIVES: This work aims to introduce a Python-based algorithm and to examine the recent paradigm shift in maxillofacial surgery propelled by technological advancement. The provided code exemplifies the use of the MediaPipe library, written by Google in C++ and exposed to Python through a binding. TECHNICAL NOTE: The advent of FaceMesh, coupled with artificial intelligence (AI), has brought about a transformative wave in contemporary maxillofacial surgery. This cutting-edge deep neural network, seamlessly integrated with Virtual Surgical Planning (VSP), offers surgeons precise 4D facial mapping capabilities. It accurately identifies facial landmarks, tailoring surgical interventions to individual patients and streamlining the overall surgical procedure. CONCLUSION: FaceMesh emerges as a revolutionary tool in modern maxillofacial surgery. This deep neural network empowers surgeons with detailed insights into facial morphology, aiding personalized interventions and optimizing surgical outcomes. Real-time assessment of facial dynamics contributes to improved aesthetic and functional results, particularly in complex cases such as facial asymmetries or reconstructions. Additionally, FaceMesh has the potential for early detection of medical conditions and disease prediction, further enhancing patient care. Ongoing refinement and validation are essential to address limitations and ensure the reliability and effectiveness of FaceMesh in clinical settings.
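For readers unfamiliar with the library mentioned above, a minimal sketch of calling MediaPipe FaceMesh from its Python binding is shown below; the image path is a placeholder and the downstream surgical-planning logic is not reproduced here.

```python
# Minimal sketch of extracting FaceMesh landmarks via MediaPipe's Python binding.
# "face.jpg" is a placeholder path; the surgical-planning logic is not shown.
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")                      # assumed input photograph
with mp.solutions.face_mesh.FaceMesh(
        static_image_mode=True,
        max_num_faces=1,
        refine_landmarks=True) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark  # 468+ normalized (x, y, z) points
    h, w = image.shape[:2]
    nose_tip = landmarks[1]                               # index 1 approximates the nose tip
    print(f"Nose tip at pixel ({nose_tip.x * w:.0f}, {nose_tip.y * h:.0f})")
```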

6.
J Med Internet Res ; 26: e42904, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38477981

ABSTRACT

BACKGROUND: While characteristic facial features provide important clues for finding the correct diagnosis in genetic syndromes, valid assessment can be challenging. The next-generation phenotyping algorithm DeepGestalt analyzes patient images and provides syndrome suggestions. GestaltMatcher matches patient images with similar facial features. The new D-Score provides a score for the degree of facial dysmorphism. OBJECTIVE: We aimed to test state-of-the-art facial phenotyping tools by benchmarking GestaltMatcher and D-Score and comparing them to DeepGestalt. METHODS: Using a retrospective sample of 4796 images of patients with 486 different genetic syndromes (London Medical Database, GestaltMatcher Database, and literature images) and 323 inconspicuous control images, we determined the clinical use of D-Score, GestaltMatcher, and DeepGestalt, evaluating sensitivity; specificity; accuracy; the number of supported diagnoses; and potential biases such as age, sex, and ethnicity. RESULTS: DeepGestalt suggested 340 distinct syndromes and GestaltMatcher suggested 1128 syndromes. The top-30 sensitivity was higher for DeepGestalt (88%, SD 18%) than for GestaltMatcher (76%, SD 26%). DeepGestalt generally assigned lower scores but provided higher scores for patient images than for inconspicuous control images, thus allowing the 2 cohorts to be separated with an area under the receiver operating characteristic curve (AUROC) of 0.73. GestaltMatcher could not separate the 2 classes (AUROC 0.55). Trained for this purpose, D-Score achieved the highest discriminatory power (AUROC 0.86). D-scores increased with the age of the depicted individuals, and male individuals yielded higher D-scores than female individuals. Ethnicity did not appear to influence D-scores. CONCLUSIONS: If used with caution, algorithms such as D-Score could help clinicians with constrained resources or limited experience in syndromology to decide whether a patient needs further genetic evaluation. Algorithms such as DeepGestalt could support diagnosing rather common genetic syndromes with facial abnormalities, whereas algorithms such as GestaltMatcher could suggest rare diagnoses that are unknown to the clinician in patients with a characteristic, dysmorphic face.


Subjects
Algorithms, Benchmarking, Humans, Female, Male, Retrospective Studies, Area Under Curve, Computers
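As a small illustration of the patient-versus-control separation metric reported above, the sketch below computes an AUROC from one score per image, in the way a tool such as D-Score or DeepGestalt would be evaluated; the score values themselves are simulated, not taken from the study.

```python
# Sketch: computing the patient-vs-control AUROC reported for tools such as
# D-Score or DeepGestalt, given one score per image (values simulated here).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
patient_scores = rng.normal(loc=0.6, scale=0.2, size=4796)   # simulated tool scores for patient images
control_scores = rng.normal(loc=0.4, scale=0.2, size=323)    # simulated scores for control images

y_true = np.concatenate([np.ones_like(patient_scores), np.zeros_like(control_scores)])
y_score = np.concatenate([patient_scores, control_scores])
print(f"AUROC (patients vs controls, simulated): {roc_auc_score(y_true, y_score):.2f}")
```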
7.
J Imaging ; 10(3)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38535139

ABSTRACT

Personal privacy protection has been extensively investigated, and privacy-preserving face recognition combines face privacy protection with face recognition. Traditional face privacy-protection methods encrypt or perturb facial images, but the original images or parameters must be restored during recognition. In this paper, it is found that faces can still be recognized correctly when only some of the high-order and local feature information is retained and the rest is fuzzed. Based on this, a privacy-preserving face recognition method combining random convolution and self-learning batch normalization is proposed. The method generates a privacy-preserved, scrambled facial image whose degree of fuzziness approaches that of an encrypted image. The server recognizes the scrambled image directly, with accuracy equivalent to that obtained on the normal facial image, while ensuring both the revocability and the irreversibility of the protected facial data. In experiments on the LFW, CelebA, and self-collected face datasets, the proposed method outperforms existing privacy-preserving face recognition methods in both the elimination of visual information and recognition accuracy: recognition accuracy exceeds 99%, and the visual information elimination is close to an encryption effect.
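The paper's exact network is not reproduced here; the following PyTorch sketch only illustrates the general idea of scrambling a face image with a fixed, randomly initialized convolution followed by batch normalization before it leaves the client. Layer sizes and the recognition backend are assumptions.

```python
# Loose PyTorch sketch of the general idea only: scramble a face image with a
# fixed random convolution plus batch normalization before it leaves the client.
# Layer sizes and the recognition backend are assumptions, not the paper's design.
import torch
import torch.nn as nn

class RandomScrambler(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        for p in self.conv.parameters():      # freeze the random weights: they act as the "key"
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bn(self.conv(x))

face = torch.rand(1, 3, 112, 112)             # placeholder face tensor
scrambled = RandomScrambler()(face)           # visually obfuscated image sent to the server
print(scrambled.shape)
```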

8.
J Integr Neurosci ; 23(3): 48, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38538212

ABSTRACT

In the context of perceiving individuals within and outside of social groups, distinct cognitive processes and neural mechanisms are at work. Extensive research in recent years has delved into the neural mechanisms that underlie differences in how we perceive individuals from different social groups. To provide a deeper understanding of these mechanisms, we present a comprehensive review from the perspectives of facial recognition and memory, intergroup identification, empathy, and pro-social behavior, focusing on studies that use functional magnetic resonance imaging (fMRI) and event-related potential (ERP) techniques to explore the relationship between brain regions and behavior. Findings from fMRI studies reveal that the brain regions associated with intergroup differentiation in perception and behavior do not operate independently but instead exhibit dynamic interactions. Similarly, ERP studies indicate that the amplitudes of neural responses relate to intergroup perception and behavior in varied combinations.


Subjects
Empathy, Facial Recognition, Humans, Magnetic Resonance Imaging, Brain/physiology, Evoked Potentials/physiology, Brain Mapping, Social Behavior
9.
Math Biosci Eng ; 21(3): 4165-4186, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38549323

ABSTRACT

In recent years, the extensive use of facial recognition technology has raised concerns about data privacy and security in applications such as security systems, attendance systems, and smartphone access. In this study, a blockchain-based decentralized facial recognition system (DFRS) is designed to address these concerns, seeking a critical balance between the benefits of facial recognition and the protection of individuals' privacy rights in an era of increasing surveillance. First, facial traits are segmented into separate clusters, each maintained by a specialized node that preserves data privacy and security. The data are then obfuscated using generative adversarial networks. To ensure the security and authenticity of the data, the facial data are encoded and stored in the blockchain. The proposed system achieves significant results on the CelebA dataset, demonstrating the effectiveness of the approach: the model attains 99.80% accuracy, outperforming existing methods. These results emphasize the system's efficacy, especially in biometrics and privacy-focused applications, demonstrating outstanding precision and efficiency. This research provides a complete and novel solution for secure facial recognition and data protection.


Subjects
Blockchain, Deep Learning, Facial Recognition, Humans, Privacy, Phenotype
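None of the DFRS components (clustering, GAN obfuscation, on-chain storage) are specified in enough detail here to reproduce, so the following is only a toy illustration of the last step: hashing an encoded face embedding and appending it to a minimal hash-chained ledger so that raw biometrics are never stored directly.

```python
# Toy illustration only (not the DFRS design): store hashes of encoded face
# embeddings in a minimal hash-chained ledger, so raw biometrics are never stored directly.
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyChain:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "payload": "genesis", "ts": time.time()}]

    def add_embedding(self, user_id: str, embedding: list) -> dict:
        payload = sha256(json.dumps({"user": user_id, "emb": embedding}).encode())
        prev_hash = sha256(json.dumps(self.blocks[-1], sort_keys=True).encode())
        block = {"index": len(self.blocks), "prev": prev_hash, "payload": payload, "ts": time.time()}
        self.blocks.append(block)
        return block

chain = ToyChain()
print(chain.add_embedding("patient-001", [0.12, -0.08, 0.33]))   # fake embedding values
```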
10.
Neuroimage ; 291: 120591, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38552812

ABSTRACT

Functional imaging has helped to establish the human insula as a major processing network for integrating input with the current state of the body. However, these studies remain at a correlative level. Studies of insula damage, in contrast, show lesion-specific performance deficits. Case reports have provided anecdotal evidence for deficits following insula damage, but group lesion studies offer a number of advantages in providing evidence for the functional representation of the insula. We conducted a systematic literature search to review group studies of patients with insula damage after stroke and identified 23 studies that tested emotional processing performance in these patients. Eight of these studies assessed emotional processing of visual (most commonly IAPS), auditory (e.g., prosody), somatosensory (emotional touch) and autonomic (heart rate variability) measures. The other fifteen studies examined social processing, including emotional face recognition, gaming tasks and tests of empathy. Overall, there was a bias towards testing only patients with right-hemispheric lesions, making it difficult to assess hemispheric specificity. Although many studies included an overlay of lesion maps to characterise their patients, most did not differentiate lesion statistics between insula subunits or apply voxel-based associations between lesion location and impairment, probably because small group sizes limit statistical comparisons. We conclude that multicentre analyses of lesion studies with comparable patients and performance tests are needed to definitively test the specific functions of parts of the insula in emotional processing and social interaction.


Subjects
Facial Recognition, Stroke, Humans, Magnetic Resonance Imaging/methods, Emotions/physiology, Stroke/complications, Stroke/diagnostic imaging, Empathy, Brain Mapping/methods
11.
Front Psychol ; 15: 1204204, 2024.
Article in English | MEDLINE | ID: mdl-38344279

ABSTRACT

Introduction: Emotion processing is an essential part of interpersonal relationships and social interactions. Changes in emotion processing have been found both in mood disorders and in aging; however, the interaction between these factors has yet to be examined in detail. This is of interest because of the contrary nature of the changes observed in existing research: a negativity bias in mood disorders versus a positivity effect with aging. It is also unclear how changes in non-emotional cognitive function with aging and in mood disorders interact with these biases. Methods and results: We examined emotional processing and its relationship to age in individuals with mood disorders and in healthy control participants. Data sets from two studies examining facial expression recognition were pooled. In one study, 98 currently depressed individuals (either unipolar or bipolar) were compared with 61 healthy control participants; in the other, 100 people with bipolar disorder (in various mood states) were tested on the same facial expression recognition task. Repeated-measures analysis of variance was used to examine the effects of age and mood disorder diagnosis, alongside interactions between individual emotion, age, and mood disorder diagnosis. A positivity effect was associated with increasing age and was evident irrespective of the presence of a mood disorder or a current mood episode. Discussion: The results suggest a positivity effect emerging at a relatively early age, with no evidence of a bias toward negative emotions in mood disorder or, specifically, in depressive episodes. The positivity effect in emotional processing with aging appears to occur even in people with mood disorders. Further research is needed to understand how this fits with the negative biases seen in previous studies of mood disorders.
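A hedged sketch of the kind of analysis described above, using simulated data: a mixed ANOVA with emotion as the within-subject factor and diagnostic group as the between-subject factor, run with the pingouin package. Age and covariate handling are omitted, and the column names are assumptions.

```python
# Hedged sketch (simulated data): mixed ANOVA with emotion as the within-subject
# factor and diagnostic group as the between-subject factor, in the spirit of the
# repeated-measures analysis described above. Age/covariate handling is omitted.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
emotions = ["happy", "sad", "angry", "fearful"]
groups = ["control", "mood_disorder"]
rows = []
for s in range(60):                               # 60 simulated participants
    group = groups[s % 2]
    for emo in emotions:
        rows.append({"subject": s, "group": group, "emotion": emo,
                     "accuracy": rng.normal(0.75, 0.1)})
df = pd.DataFrame(rows)

print(pg.mixed_anova(data=df, dv="accuracy", within="emotion",
                     between="group", subject="subject"))
```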

12.
JMIR Serious Games ; 12: e52661, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38265856

ABSTRACT

This research letter presents the co-design process for RG4Face, a mime therapy-based serious game that uses computer vision for human facial movement recognition and estimation to help health care professionals and patients in the facial rehabilitation process.

13.
Med Sci Law ; : 258024241227717, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263636

ABSTRACT

The face is one of the most distinctive parts of the human body and is therefore crucial for recognizing people. Facial recognition technology (FRT) is one of the most successful and fascinating technologies of modern times, and the world has been moving towards contactless FRT since the COVID-19 pandemic. Due to its contactless biometric characteristics, FRT is becoming popular worldwide. Businesses are replacing conventional fingerprint scanners with artificial intelligence-based FRT, opening up enormous commercial prospects. Security and surveillance, authentication and access-control systems, digital healthcare, and photo retrieval are among the sectors in which its use has become essential. In the present communication, we review the global adoption of FRT, its rising market trend, its utilization in various sectors, and its challenges and rising concerns, with special reference to India and the rest of the world.

14.
Risk Anal ; 44(4): 958-971, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37496473

ABSTRACT

AI thermal facial recognition (AITFR) has been rapidly applied globally in the fight against coronavirus disease 2019 (COVID-19). However, its deployment has been accompanied by controversy over whether the public accepts it, making it necessary to assess the acceptance of AITFR during the COVID-19 crisis. Drawing upon the theory of acceptable risk and Siegrist's causal model of public acceptance (PA), we built a combined psychological model that included the perceived severity of COVID-19 (PSC) to describe the factors and pathways influencing AITFR acceptance. The model was verified through a survey conducted in Xi'an City, Shaanxi Province, China, which collected 754 valid questionnaires. The results show that (1) COVID-19 provides various application scenarios for AI-related technologies, yet the respondents' trust in AITFR was very low, and the public appeared concerned about privacy disclosure and the accuracy of the AITFR algorithm. (2) PSC, social trust (ST), and perceived benefit (PB) were found to directly affect AITFR acceptance. (3) PSC was found to have a significant positive effect on perceived risk (PR), while PR had no significant effect on PA, which is inconsistent with the findings of previous studies. (4) PB was found to be a stronger mediator than ST of the indirect effect of PSC on AITFR acceptance.


Subjects
COVID-19, Facial Recognition, Humans, Trust, Psychological Models, Artificial Intelligence
15.
J Clin Monit Comput ; 38(2): 261-270, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38150126

ABSTRACT

PURPOSE: This study aimed to assess whether an artificial intelligence model based on facial expressions can accurately predict significant postoperative pain. METHODS: A total of 155 facial expressions from patients who underwent gastric cancer surgery were analyzed to extract facial action units (AUs), gaze, landmarks, and positions. These features were used to construct various machine learning (ML) models designed to distinguish significant postoperative pain (NRS ≥ 7) from less significant pain (NRS < 7). AUs predictive of NRS ≥ 7 were determined and compared to AUs known to be associated with pain in awake patients. The areas under the receiver operating characteristic curves (AUROCs) of the ML models were calculated and compared using DeLong's test. RESULTS: AU17 (chin raising) and AU20 (lip stretching) were found to be associated with NRS ≥ 7 (both P ≤ 0.004). AUs known to be associated with pain in awake patients showed no association with pain in postoperative patients. An ML model based on AU17 and AU20 demonstrated an AUROC of 0.62 for NRS ≥ 7, which was inferior to a model based on all AUs (AUROC = 0.81, P = 0.006). Among facial features, head position and facial landmarks proved to be better predictors of NRS ≥ 7 (AUROC, 0.85-0.96) than AUs. A merged ML model that utilized gaze and eye landmarks, as well as head position and facial landmarks, exhibited the best performance (AUROC, 0.90) in predicting significant postoperative pain. CONCLUSION: ML models using facial expressions can accurately predict the presence of significant postoperative pain and have the potential to screen patients in need of rescue analgesia. TRIAL REGISTRATION NUMBER: This study was registered at ClinicalTrials.gov (NCT05477303; date: June 17, 2022).


Subjects
Artificial Intelligence, Facial Expression, Humans, Face, Postoperative Pain/diagnosis, Pilot Projects
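As a sketch of the general pattern only (the study's specific ML models are not listed in the abstract), the code below fits a logistic regression on simulated AU17/AU20 intensities to predict NRS ≥ 7 and reports an AUROC.

```python
# Sketch of the general pattern only: predict significant postoperative pain
# (NRS >= 7) from facial action-unit intensities such as AU17 and AU20.
# The classifier choice and the simulated data are assumptions, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
au_features = rng.random(size=(155, 2))            # columns: AU17, AU20 intensity (simulated)
severe_pain = rng.integers(0, 2, size=155)         # 1 if NRS >= 7 (simulated labels)

probs = cross_val_predict(LogisticRegression(), au_features, severe_pain,
                          cv=5, method="predict_proba")[:, 1]
print(f"Illustrative AUROC: {roc_auc_score(severe_pain, probs):.2f}")
```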
16.
J Med Imaging (Bellingham) ; 10(6): 066501, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074629

ABSTRACT

Purpose: Previous studies have demonstrated that three-dimensional (3D) volumetric renderings of magnetic resonance imaging (MRI) brain data can be used to identify patients using facial recognition. We have shown that facial features can be identified on simulation computed tomography (CT) images for radiation oncology and mapped to face images from a database. We aim to determine whether CT images can be anonymized using anonymization software that was designed for T1-weighted MRI data. Approach: Our study examines (1) the ability of off-the-shelf anonymization algorithms to anonymize CT data and (2) the ability of facial recognition algorithms to match the anonymized renderings to faces in a database of facial images. We generated 3D renderings from 57 head CT scans from The Cancer Imaging Archive database. Data were anonymized using AFNI (deface, reface, and 3dSkullStrip) and FSL's BET. Anonymized data were compared to the original renderings and passed through facial recognition algorithms (VGG-Face, FaceNet, DLib, and SFace) against a facial database (Labeled Faces in the Wild) to determine what matches could be found. Results: All modules were able to process the CT data, and AFNI's 3dSkullStrip and FSL's BET consistently showed lower reidentification rates than the original renderings. Conclusions: These results highlight the potential use of anonymization algorithms as a clinical standard for deidentifying brain CT data. Our study demonstrates the importance of continued vigilance for patient privacy in publicly shared datasets and of continued evaluation of anonymization methods for CT data.
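The recognition models named above (VGG-Face, FaceNet, DLib, SFace) are all available through the open-source deepface package; whether the study used that package is not stated, so the following is a hedged sketch of the re-identification check with placeholder paths.

```python
# Hedged sketch: check whether a 3D rendering of a defaced CT still matches any
# image in a face database, using the open-source deepface package (which ships
# VGG-Face, Facenet, Dlib and SFace). Paths are placeholders; whether the study
# used this exact package is not stated in the abstract.
from deepface import DeepFace

matches = DeepFace.find(
    img_path="renderings/subject01_defaced.png",   # placeholder rendering path
    db_path="lfw/",                                # placeholder face-database directory
    model_name="VGG-Face",
    enforce_detection=False,                       # renderings may lack a detectable face
)
print(f"Candidate matches returned: {len(matches[0])}")
```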

17.
Dement Neurocogn Disord ; 22(4): 158-168, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38025409

ABSTRACT

Background and Purpose: Deficits in facial emotion recognition affect daily life, particularly in patients with Alzheimer's disease. We aimed to assess these deficits in three groups: subjective cognitive decline (SCD), mild cognitive impairment (MCI), and mild Alzheimer's dementia (AD). Additionally, we explored the associations between facial emotion recognition and cognitive performance. Methods: We used the Korean version of the Florida Facial Affect Battery (K-FAB) in 72 SCD, 76 MCI, and 76 mild AD subjects. Groups were compared using analysis of covariance (ANCOVA), with adjustments for age and sex. The Mini-Mental State Examination (MMSE) was used to gauge overall cognitive status, while the Seoul Neuropsychological Screening Battery (SNSB) was employed to evaluate performance in five cognitive domains: attention, language, visuospatial abilities, memory, and frontal executive functions. Results: The ANCOVA showed significant group differences in K-FAB subtests 3, 4, and 5 (p=0.001, p=0.003, and p=0.004, respectively), especially for the angry and fearful emotions. Recognition of anger in K-FAB subtest 5 declined from SCD to MCI to mild AD. Performance correlated with age and education, and after controlling for these factors, MMSE score and frontal executive function were associated with the K-FAB tests, particularly subtest 5 (r=0.507, p<0.001 and r=-0.288, p=0.026, respectively). Conclusions: Emotion recognition deficits worsened from SCD to MCI to mild AD, especially for negative emotions. More complex tasks, such as matching, selection, and naming, showed greater deficits, which were linked to cognitive impairment, especially frontal executive dysfunction.
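A hedged sketch of the group comparison described above, on simulated data: an ANCOVA comparing a K-FAB subtest score across the SCD, MCI, and AD groups while adjusting for age and sex, using statsmodels. Variable names and values are assumptions.

```python
# Hedged sketch (simulated data): ANCOVA comparing a K-FAB subtest score across
# SCD, MCI, and mild AD groups while adjusting for age and sex, via statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
n = 224
df = pd.DataFrame({
    "group": rng.choice(["SCD", "MCI", "AD"], size=n),
    "age": rng.normal(72, 6, size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "kfab_score": rng.normal(15, 3, size=n),        # simulated subtest score
})
model = ols("kfab_score ~ C(group) + age + C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```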

19.
Plast Surg (Oakv) ; 31(4): 321-329, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37915352

ABSTRACT

Introduction: Multiple tools have been developed for facial feature measurement and analysis using facial recognition machine learning techniques. However, several challenges remain before these will be useful in the clinical context of reconstructive and aesthetic plastic surgery. Smartphone-based applications utilizing open-access machine learning tools can be rapidly developed, deployed, and tested for use in clinical settings. This research compares a smartphone-based facial recognition algorithm against direct and digital measurements for use in facial analysis. Methods: Facekit is a camera application developed for Android that utilizes ML Kit, an open-access computer vision Application Programming Interface developed by Google. Using the facial landmark module, we measured 4 facial proportions in 15 healthy subjects and compared them to direct surface and digital measurements using the intraclass correlation coefficient (ICC) and Pearson correlation. Results: Measurement of the naso-facial proportion achieved the highest ICC, 0.321, where ICC > 0.75 is considered excellent agreement between methods. Repeated-measures analysis of variance showed that proportion measurements from the ML Kit, direct, and digital methods differed significantly (F[2,14] = 6-26, P << .05). Facekit measurements of the orbital, orbitonasal, naso-oral, and naso-facial ratios had overall low correlation and agreement with both direct and digital measurements (R << 0.5, ICC << 0.75). Conclusion: Facekit is a smartphone camera application for rapid facial feature analysis. Agreement between Facekit's machine learning measurements and direct and digital measurements was low. We conclude that the chosen pretrained facial recognition software is not accurate enough to conduct a clinically useful facial analysis. Custom models trained on accurate and clinically relevant landmarks may provide better performance.
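As an illustration of the agreement statistic used above, the sketch below computes intraclass correlation coefficients between three measurement methods (ML Kit app, direct surface, digital) for a single facial ratio using the pingouin package; the measurement values are simulated, not the study's.

```python
# Hedged sketch (simulated values): intraclass correlation between the three
# measurement methods (ML Kit app, direct surface, digital) for one facial ratio.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(6)
rows = []
for subject in range(15):                          # 15 subjects, as in the study
    true_ratio = rng.normal(0.8, 0.05)
    for method in ["mlkit", "direct", "digital"]:
        rows.append({"subject": subject, "method": method,
                     "ratio": true_ratio + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

icc = pg.intraclass_corr(data=df, targets="subject", raters="method", ratings="ratio")
print(icc[["Type", "ICC"]])
```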


20.
Psychon Bull Rev ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37904006

ABSTRACT

Researchers in cognitive and forensic psychology have long been interested in the impact of individual differences on eyewitness memory. The sex of the eyewitness is one such factor, with a body of research spanning over 50 years that has sought to determine if and how eyewitness memory differs between males and females. This research has significant implications across the criminal justice system, particularly in the context of gendered issues such as sexual assault. However, the findings have been inconsistent, and there is still a lack of consensus across the literature. A scoping review and analysis of the literature was performed to examine the available evidence on whether sex differences in eyewitness memory exist, what explanations have been proposed for any differences found, and how this research has been conducted. Through a strategic search of seven databases, 22 relevant articles were found and reviewed. Despite the mixed methodologies and findings, the research suggests that neither males nor females are superior in the total amount of accurate information reported; rather, females may have better memory for person-related details, while males may perform better for details of the surrounding environment. There was also consistent evidence for an own-gender bias. There was some consensus that differences in selective attention between males and females may underlie these sex differences in eyewitness memory. However, none of the studies directly tested this suggested attentional factor, and future research is needed to investigate it using a more systematic and empirical approach.
